19 research outputs found

    The effects of visual control and distance in modulating peripersonal spatial representation

    In the presence of vision, goal-directed motor acts can trigger spatial remapping, i.e., reference frame transformations that allow for better interaction with targets. However, it is still unclear how peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target's position within the individual's reachable space, and which cerebral areas subserve these processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually guided grasping of targets located at the far distance than during grasping of targets located near the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near than at the far distance. The results suggest that, in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even for more proximal targets.

    What is ‘anti’ about anti-reaches? Reference frames selectively affect reaction times and endpoint variability

    Reach movement planning involves the representation of spatial target information in different reference frames. Neurons at parietal and premotor stages of the cortical sensorimotor system represent target information in eye-centered and hand-centered reference frames, respectively. How the different neuronal representations affect behavioral parameters of motor planning and control, i.e., which stage of neural representation is relevant for which aspect of behavior, is not obvious from the physiology. Here, we test with a behavioral experiment whether different kinematic movement parameters are affected to different degrees by an eye- or a hand-centered reference frame. We used a generalized anti-reach task to test the influence of stimulus-response compatibility (SRC) in eye- and hand-reference frames on reach reaction times, movement times, and endpoint variability. Whereas in a standard anti-reach task the SRC is identical in the eye- and hand-reference frames, our task separated the SRC for the two reference frames. We found that reaction times were influenced by the SRC in both the eye- and the hand-reference frame. In contrast, movement times were influenced only by the SRC in the hand-reference frame, and endpoint variability only by the SRC in the eye-reference frame. Since movement time and endpoint variability result from both planning and control processes, while reaction times reflect only the planning process, we suggest that SRC effects on reaction times are well suited to investigating reference frames of movement planning, and that eye- and hand-reference frames have distinct effects on different phases of motor action and on different kinematic movement parameters.

    Gaze fixation improves the stability of expert juggling

    Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (the so-called gaze-through strategy). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans, or for shifts in attention. To determine whether a stable gaze indeed has such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers (n = 5) were included for comparison, even though our predictions pertained specifically to expert juggling. We observed that experts, but not novices, juggled with significantly less variability when fixating than during unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning, which may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain.

    A Functional and Structural Investigation of the Human Fronto-Basal Volitional Saccade Network

    Almost all cortical areas are connected to the subcortical basal ganglia (BG) through parallel recurrent inhibitory and excitatory loops, exerting volitional control over automatic behavior. As this model is largely based on non-human primate research, we used high-resolution functional MRI and diffusion tensor imaging (DTI) to investigate the functional and structural organization of the human (pre)frontal cortico-basal network controlling eye movements. Participants performed saccades in darkness, pro- and antisaccades, and observed stimuli during fixation. We observed several bilateral functional subdivisions along the precentral sulcus around the human frontal eye fields (FEF): a medial and a lateral zone activating for saccades in darkness, a more fronto-medial zone preferentially active for ipsilateral antisaccades, and a large anterior strip along the precentral sulcus activating for visual stimulus presentation during fixation. The supplementary eye fields (SEF) were identified along the medial wall, containing all aforementioned functions. In the striatum, the BG area receiving almost all cortical input, all saccade-related activation was observed in the putamen, previously considered a skeletomotor striatal subdivision. Activation elicited by the cue instructing pro- or antisaccade trials was clearest in the medial FEF and the right putamen. DTI fiber tracking revealed that the subdivisions of the human FEF complex are mainly connected to the putamen, in agreement with the fMRI findings. The present findings demonstrate that the human FEF has functional subdivisions somewhat comparable to those of non-human primates. However, the connections to and activation in the human striatum preferentially involve the putamen, not the caudate nucleus as reported for monkeys. This could imply that fronto-striatal projections for the oculomotor system are fundamentally different between humans and monkeys. Alternatively, published reports of monkey studies could be biased toward the caudate nucleus over the putamen in the search for oculomotor functions.

    Enhancing sensorimotor activity by controlling virtual objects with gaze

    This fMRI study examines the brain activity of healthy volunteers who manipulated a virtual object in the context of a digital game using two different control methods: their right hand or their gaze. The results show extended activations in sensorimotor areas not only when participants played in the traditional way (using their hand) but also when they used their gaze to control the virtual object. Furthermore, with the exception of the primary motor cortex, regional motor activity was similar regardless of the effector (arm or eye). These results have a potential application in the field of neurorehabilitation as a new approach to activating the sensorimotor system in support of the recovery of motor function.

    Sensory transformations and the use of multiple reference frames for reach planning

    The sensory signals that drive movement planning arrive in a variety of “reference frames”, so integrating or comparing them requires sensory transformations. We propose a model in which the statistical properties of sensory signals and of their transformations determine how these signals are used. This model captures the patterns of gaze-dependent errors found in our human psychophysics experiment when the sensory signals available for reach planning are varied. These results challenge two widely held ideas: that error patterns directly reflect the reference frame of the underlying neural representation, and that it is preferable to use a single common reference frame for movement planning. We show that gaze-dependent error patterns, often cited as evidence for retinotopic reach planning, can be explained by a transformation bias and are not exclusively linked to retinotopic representations. Further, the presence of multiple reference frames allows for optimal use of available sensory information and explains task-dependent reweighting of sensory signals.
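    The reliability-based reweighting this abstract describes can be illustrated with a minimal numerical sketch, not drawn from the study itself: all positions and variances below are hypothetical. The idea is that expressing a hand-centered cue in an eye-centered frame costs extra transformation noise, so a minimum-variance combination automatically shifts weight toward the cue that needs no transformation.

    ```python
    def combine(mu_a, var_a, mu_b, var_b):
        """Minimum-variance (reliability-weighted) combination of two
        Gaussian estimates of the same target position."""
        w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
        mu = w_a * mu_a + (1.0 - w_a) * mu_b
        var = 1.0 / (1.0 / var_a + 1.0 / var_b)
        return mu, var

    # Hypothetical numbers: a visual (eye-centered) and a proprioceptive
    # (hand-centered) estimate of the same target location, in degrees.
    mu_vis, var_vis = 10.0, 1.0
    mu_prop, var_prop = 12.0, 1.0

    # Expressing the proprioceptive cue in the eye-centered frame requires
    # a transformation that itself adds variance (and, per the abstract,
    # can introduce a gaze-dependent bias).
    var_transform = 3.0

    mu_c, var_c = combine(mu_vis, var_vis, mu_prop, var_prop + var_transform)

    # The transformation cost shifts the weight toward the visual cue
    # (w_vis = 0.8), so the same two signals are weighted differently
    # depending on the frame in which the task is planned.
    print(round(mu_c, 2), round(var_c, 2))
    ```

    With equal raw reliabilities, the combined estimate would sit midway between the cues; adding transformation variance to one cue pulls the estimate toward the other, which is one way "statistical properties of sensory signals and their transformations" can produce task-dependent reweighting.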